feat(allocator): introduce CapacityLimit for FixedSizedAllocator #17014
Conversation
Extract `Condvar` and `max_count` logic into a separate `CapacityLimit` struct. This ensures the `Condvar` is only allocated when `max_count` is `Some`, and makes the capacity limiting logic reusable and generic.
CodSpeed Performance Report: Merging #17014 will not alter performance.
Pull request overview
This PR introduces capacity limiting for FixedSizeAllocatorPool to address Windows-specific limitations where the number of fixed-sized allocators needs to be constrained below the thread count. The implementation adds a CapacityLimit struct with blocking/waiting behavior using condition variables.
Key changes:
- Adds `CapacityLimit` struct with condition variable-based blocking when capacity is reached
- Updates `FixedSizeAllocatorPool::new` to accept a `max_count: Option<u32>` parameter
- Implements waiting/notification mechanism for allocator availability
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| `crates/oxc_linter/src/service/runtime.rs` | Updates call site to pass `None` for `max_count`, maintaining existing behavior |
| `crates/oxc_allocator/src/pool/mod.rs` | Updates public API to accept `max_count` parameter, adds documentation |
| `crates/oxc_allocator/src/pool/fixed_size.rs` | Implements core capacity limiting logic with `CapacityLimit` struct and blocking behavior |
```rust
// Check if we're at maximum capacity
if let Some(capacity_limit) = &self.capacity_limit {
    let current_count = self.next_id.load(Ordering::Relaxed);
    if capacity_limit.is_at_capacity(current_count) {
        // At maximum capacity - wait for an item to be returned
        return capacity_limit.wait_for_item(allocators);
    }
}

// Create a new allocator.
// Each allocator needs to have a unique ID, but the order those IDs are assigned in
// doesn't matter, so `Ordering::Relaxed` is fine.
let id = self.next_id.fetch_add(1, Ordering::Relaxed);
// Protect against IDs wrapping around.
assert!(id < u32::MAX, "Created too many allocators");
FixedSizeAllocator::new(id)
```
There is a race condition between checking capacity and incrementing next_id. Multiple threads could simultaneously:
- Check that allocators.pop() returns None (line 110)
- Load current_count and see it's below max_count (line 116)
- All pass the capacity check (line 117)
- All increment next_id with fetch_add (line 126)
This allows the pool to create more allocators than max_count allows. The capacity check needs to happen atomically with the ID increment, or use a different synchronization approach.
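If the capacity check had to stay outside the mutex, one way to close this window would be to fold the check into the increment itself with a compare-and-swap loop, so that claiming an ID and verifying capacity happen as one atomic step. A minimal sketch, using a hypothetical `try_claim_id` helper that is not part of this PR:

```rust
use std::sync::atomic::{AtomicU32, Ordering};

// Atomically claim a new allocator ID only if the pool is below `max_count`.
// Returns `None` when the pool is already at capacity, so the check and the
// increment can never be interleaved by another thread.
fn try_claim_id(next_id: &AtomicU32, max_count: u32) -> Option<u32> {
    next_id
        .fetch_update(Ordering::Relaxed, Ordering::Relaxed, |current| {
            if current < max_count { Some(current + 1) } else { None }
        })
        .ok()
}

fn main() {
    let next_id = AtomicU32::new(0);
    assert_eq!(try_claim_id(&next_id, 2), Some(0));
    assert_eq!(try_claim_id(&next_id, 2), Some(1));
    // Third claim fails: capacity of 2 is exhausted.
    assert_eq!(try_claim_id(&next_id, 2), None);
}
```

Note that in the PR's `get_impl` the `allocators` mutex is held for the whole function, which may already serialize the check and the increment; the CAS approach only matters if that lock is ever released between them.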
```rust
}

// Notify one waiting thread that an allocator is available (if capacity is limited)
if let Some(capacity_limit) = &self.capacity_limit {
    capacity_limit.notify_available();
```
The mutex guard is dropped before notifying waiting threads. This creates a window where a thread waiting in wait_for_item could be notified but another thread could acquire the lock first and pop the allocator, causing the notified thread to wait again unnecessarily. Consider moving the notify_available call inside the mutex-protected block, or document why this ordering is intentional.
Suggested change:

```diff
-}
-// Notify one waiting thread that an allocator is available (if capacity is limited)
-if let Some(capacity_limit) = &self.capacity_limit {
-    capacity_limit.notify_available();
+    // Notify one waiting thread that an allocator is available (if capacity is limited)
+    if let Some(capacity_limit) = &self.capacity_limit {
+        capacity_limit.notify_available();
+    }
+}
```
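For what it's worth, std's `Condvar` pattern tolerates the unlock-then-notify ordering, because a waiter must re-check its predicate in a loop anyway (spurious wakeups are permitted, and another thread may "steal" the item first); a wasted notification only costs an extra wakeup, not correctness. A self-contained sketch of that wait-loop pattern, using simplified types rather than the PR's:

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

// A pool of items guarded by a Mutex, with a Condvar to signal availability.
type Pool = (Mutex<Vec<u32>>, Condvar);

// Block until an item is available, then take it. The `while` loop re-checks
// the predicate after every wakeup, so a notification that fires before we
// start waiting, or whose item another thread steals, is handled correctly.
fn take_blocking(pool: &Pool) -> u32 {
    let (lock, cvar) = pool;
    let mut items = lock.lock().unwrap();
    while items.is_empty() {
        items = cvar.wait(items).unwrap();
    }
    items.pop().unwrap()
}

// Return an item: push while holding the lock, then notify after dropping it.
fn put(pool: &Pool, item: u32) {
    pool.0.lock().unwrap().push(item);
    pool.1.notify_one();
}

fn main() {
    let pool = Arc::new((Mutex::new(Vec::new()), Condvar::new()));
    let waiter = Arc::clone(&pool);
    let handle = thread::spawn(move || take_blocking(&waiter));
    put(&pool, 42);
    assert_eq!(handle.join().unwrap(), 42);
}
```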
```rust
/// This method blocks until an item is returned to the pool.
fn wait_for_item<T>(&self, mut items: std::sync::MutexGuard<'_, Vec<T>>) -> T {
    loop {
        items = self.available.wait(items).unwrap();
```
The unwrap() in wait_for_item could panic if the mutex is poisoned, but this is not documented. The panic behavior should either be documented in the method's docstring or the error should be propagated to the caller.
Suggested change:

```diff
-items = self.available.wait(items).unwrap();
+items = match self.available.wait(items) {
+    Ok(guard) => guard,
+    Err(poisoned) => poisoned.into_inner(),
+};
```
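To see what the suggested `into_inner()` recovery does in isolation: a poisoned `Mutex` still holds valid data, and `PoisonError::into_inner` hands back the guard regardless. A std-only sketch (not the PR's code; `read_even_if_poisoned` is a hypothetical helper):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Read a value out of a mutex even if a previous holder panicked and
// poisoned it, mirroring the `into_inner()` recovery in the suggestion.
fn read_even_if_poisoned(data: &Mutex<u32>) -> u32 {
    match data.lock() {
        Ok(guard) => *guard,
        Err(poisoned) => *poisoned.into_inner(),
    }
}

fn main() {
    let data = Arc::new(Mutex::new(5u32));
    let poisoner = Arc::clone(&data);
    // A thread that panics while holding the lock poisons the mutex.
    let _ = thread::spawn(move || {
        let _guard = poisoner.lock().unwrap();
        panic!("poison the mutex");
    })
    .join();
    // `lock()` now returns Err, but the data is still reachable.
    assert_eq!(read_even_if_poisoned(&data), 5);
}
```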
```rust
fn get_impl(&self) -> FixedSizeAllocator {
    let mut allocators = self.allocators.lock().unwrap();

    // Try to get an allocator from the pool
    if let Some(alloc) = allocators.pop() {
        return alloc;
    }

    // Check if we're at maximum capacity
    if let Some(capacity_limit) = &self.capacity_limit {
        let current_count = self.next_id.load(Ordering::Relaxed);
        if capacity_limit.is_at_capacity(current_count) {
            // At maximum capacity - wait for an item to be returned
            return capacity_limit.wait_for_item(allocators);
        }
    }

    // Create a new allocator.
    // Each allocator needs to have a unique ID, but the order those IDs are assigned in
    // doesn't matter, so `Ordering::Relaxed` is fine.
    let id = self.next_id.fetch_add(1, Ordering::Relaxed);
    // Protect against IDs wrapping around.
    assert!(id < u32::MAX, "Created too many allocators");
    FixedSizeAllocator::new(id)
}
```
The new capacity limiting behavior lacks test coverage. Consider adding tests to verify: (1) blocking behavior when capacity is reached, (2) correct unblocking when allocators are returned, (3) race condition handling when multiple threads try to acquire allocators simultaneously, and (4) correct behavior when max_count is None.
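A test for points (1) and (2) might look roughly like the following, written here against a simplified stand-in rather than the real `FixedSizeAllocatorPool` API (the `Pool` type below is illustrative only):

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

// Simplified stand-in for a capacity-limited pool: at most `max_count`
// items are ever created; `get` blocks once capacity is exhausted.
struct Pool {
    // (available items, number of items created so far)
    state: Mutex<(Vec<u32>, u32)>,
    max_count: u32,
    available: Condvar,
}

impl Pool {
    fn get(&self) -> u32 {
        let mut state = self.state.lock().unwrap();
        loop {
            if let Some(item) = state.0.pop() {
                return item;
            }
            if state.1 < self.max_count {
                // Below capacity: create a fresh item with a new ID.
                state.1 += 1;
                return state.1 - 1;
            }
            // At capacity: block until an item is returned.
            state = self.available.wait(state).unwrap();
        }
    }

    fn put(&self, item: u32) {
        self.state.lock().unwrap().0.push(item);
        self.available.notify_one();
    }
}

fn main() {
    let pool = Arc::new(Pool {
        state: Mutex::new((Vec::new(), 0)),
        max_count: 1,
        available: Condvar::new(),
    });

    let first = pool.get(); // creates item 0; pool is now at capacity
    let blocked = Arc::clone(&pool);
    let handle = thread::spawn(move || blocked.get()); // must block
    thread::sleep(Duration::from_millis(50)); // give the thread time to block
    pool.put(first); // returning the item unblocks the waiter
    assert_eq!(handle.join().unwrap(), 0);
}
```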
```rust
FixedSizeAllocatorPool {
    allocators: Mutex::new(allocators),
    next_id: AtomicU32::new(0),
    capacity_limit: max_count.map(CapacityLimit::new),
```
There is no validation that max_count is greater than zero. If max_count is Some(0), the pool will always block in wait_for_item and never be able to create any allocators, resulting in a deadlock. Consider adding validation to ensure max_count is at least 1 when it's Some.
Suggested change:

```diff
-capacity_limit: max_count.map(CapacityLimit::new),
+capacity_limit: max_count.filter(|&count| count > 0).map(CapacityLimit::new),
```
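Note that the suggested `filter` silently treats `Some(0)` as "no limit at all" rather than rejecting it, which may or may not be the intended semantics. A small sketch of how the `Option` chain behaves, with a trivial stand-in for `CapacityLimit::new`:

```rust
fn main() {
    // Trivial stand-in for `CapacityLimit::new`, which just records the count.
    fn capacity_limit(count: u32) -> u32 {
        count
    }

    let cases: [(Option<u32>, Option<u32>); 3] = [
        (None, None),       // no limit requested
        (Some(0), None),    // zero limit is filtered out -> behaves as unlimited
        (Some(4), Some(4)), // positive limit is kept
    ];
    for (max_count, expected) in cases {
        let limit = max_count.filter(|&count| count > 0).map(capacity_limit);
        assert_eq!(limit, expected);
    }
}
```

An `assert!(count > 0)` in the constructor would instead surface `Some(0)` as a programmer error rather than quietly changing its meaning.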
|
I've never used … Hmm. Do we need this? What I vaguely had in mind after you suggested this approach is:

**Windows**

We want to create as many … So I imagined that on Windows: …
Actually, maybe need a … Or maybe instead of blocking, could do what … does:

```rust
// Windows only
let fixed_size_allocator = loop {
    // Try to get an allocator
    let allocator = {
        let mut allocators = self.allocators.lock().unwrap();
        allocators.pop()
    };
    if let Some(allocator) = allocator {
        break allocator;
    }
    // Do other work while waiting for an allocator to become available
    rayon::yield_now();
};
// Got an allocator. Continue.
```

**Mac OS + Linux**

No need for any limit. As long as no more than [thread count] … I believe you plan to only get a fixed-size allocator from the pool in …
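The spin-and-yield idea above can be exercised with std primitives alone; here `std::thread::yield_now()` stands in for `rayon::yield_now()`, which would instead let the rayon worker run other queued tasks while waiting:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Spin until an allocator is available, yielding the CPU between attempts.
// `thread::yield_now()` is a std stand-in for `rayon::yield_now()`.
fn take_spinning(allocators: &Mutex<Vec<u32>>) -> u32 {
    loop {
        // Hold the lock only for the duration of the `pop` attempt.
        if let Some(allocator) = allocators.lock().unwrap().pop() {
            break allocator;
        }
        thread::yield_now();
    }
}

fn main() {
    let allocators = Arc::new(Mutex::new(Vec::new()));
    let producer = Arc::clone(&allocators);
    // Simulate another thread returning an allocator to the pool.
    let handle = thread::spawn(move || producer.lock().unwrap().push(7u32));
    let allocator = take_spinning(&allocators);
    handle.join().unwrap();
    assert_eq!(allocator, 7);
}
```

Unlike the `Condvar` approach, this busy-waits; it only makes sense inside a work-stealing scheduler where the yield actually runs useful work.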
|
Actually, on Windows, maybe want to create 1 less …
This is a problem to implement: if we consider the LSP case, we could be allocating a very large chunk of memory that won't be freed for the lifecycle of the user's editing session, which feels unacceptable. In a different PR, I am going to take the approach of creating 1x allocator (bare minimum) in …
Since it's an error case, I called it …
See above for why I don't think we should do this
If we wanted to move the blocking logic into the linter, I think this would be OK, but importing rayon (where currently …
👍 I will change the win/non-win logic in a diff PR

For some Windows machines, we need to limit the total number of fixed-size allocators to less than the thread count; however, it is OK to use all threads.

This PR introduces a limiter for this, and introduces atomic waits that wait until a new fixed-size allocator is available.